In recent years, there has been increasing interest in establishing associations between the faces and voices of celebrities, leveraging audio-visual information from YouTube. Prior works adopt metric learning methods to learn an embedding space that is amenable to associated matching and verification tasks. Albeit showing some progress, such formulations are restrictive due to their reliance on distance-dependent margin parameters, poor run-time training complexity, and dependence on carefully crafted negative mining procedures. In this work, we hypothesize that an enriched representation, coupled with effective yet efficient supervision, is important for realizing a discriminative joint embedding space for the face-voice association task. To this end, we propose a lightweight, plug-and-play mechanism that exploits complementary cues in both modalities to form enriched fused embeddings and clusters them based on their identity labels via orthogonality constraints. We coin our proposed mechanism Fusion and Orthogonal Projection (FOP) and instantiate it in a two-stream network. The overall resulting framework is evaluated on the VoxCeleb1 and MAV-Celeb datasets with a multitude of tasks, including cross-modal verification and matching. Results show that our method performs favorably against the current state-of-the-art methods, and our proposed formulation of supervision is more effective and efficient than those employed by contemporary methods. In addition, we leverage cross-modal verification and matching tasks to analyze the impact of multiple languages on face-voice association. Code is available at: \url{https://github.com/msaadsaeed/FOP}
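The orthogonality-constrained clustering described in this abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the additive fusion and the mean-squared objective below are assumptions; the idea shown is only that same-identity fused embeddings are driven toward cosine similarity 1 and different identities toward 0 (orthogonality).

```python
import numpy as np

def fop_style_loss(face_emb, voice_emb, labels):
    """Sketch of an orthogonality-constrained objective: fuse the two
    modalities, then drive same-identity pairs toward cosine similarity 1
    and different-identity pairs toward 0 (orthogonal directions)."""
    fused = face_emb + voice_emb                      # additive fusion (an assumption)
    fused = fused / np.linalg.norm(fused, axis=1, keepdims=True)
    sim = fused @ fused.T                             # pairwise cosine similarities
    target = (labels[:, None] == labels[None, :]).astype(float)
    return float(np.mean((sim - target) ** 2))
```

When same-identity fused embeddings coincide and different-identity embeddings are already orthogonal, the loss is zero; any deviation from that geometry is penalized.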
Zero-shot learning methods rely on fixed visual and semantic embeddings extracted from independent vision and language models, both pre-trained on other large-scale tasks. This is a weakness of current zero-shot learning frameworks, as such disjoint embeddings fail to adequately associate visual and textual information with their shared semantic content. Therefore, we propose to learn semantically grounded and enriched visual information by computing a joint image and text model with a two-stream network on a proxy task. To improve the alignment between the image representations and the textual representations provided by the attributes, we leverage auxiliary captions to supply grounded semantic information. Our method, evaluated on several benchmark datasets for zero-shot learning, improves on existing state-of-the-art methods for standard ($+1.6\%$ on aPY, $+2.6\%$ on FLO) and generalized ($+2.1\%$ on AwA2, $+2.2\%$ on CUB) zero-shot recognition.
We address the problem of learning associations between faces and voices, which has recently attracted interest in the computer vision community. Existing works adopt pairwise or triplet loss formulations to learn an embedding space amenable to associated matching and verification tasks. Albeit showing some progress, such loss formulations are restrictive due to their dependence on distance-dependent margin parameters, poor run-time training complexity, and reliance on carefully crafted negative mining procedures. In this work, we hypothesize that an enriched feature representation, coupled with effective yet efficient supervision, is necessary for realizing a discriminative joint embedding space for improved face-voice association. To this end, we propose a lightweight, plug-and-play mechanism that exploits the complementary cues in both modalities to form enriched fused embeddings and clusters them based on their identity labels via orthogonality constraints. We coin our proposed mechanism fusion and orthogonal projection (FOP) and instantiate it in a two-stream pipeline. The overall resulting framework is evaluated on the large-scale VoxCeleb dataset with a multitude of tasks, including cross-modal verification and matching. Results show that our method performs favorably against the current state-of-the-art methods, and our proposed formulation of supervision is more effective and efficient than the ones employed by contemporary methods.
When robots learn reward functions using high capacity models that take raw state directly as input, they need to both learn a representation for what matters in the task -- the task "features" -- as well as how to combine these features into a single objective. If they try to do both at once from input designed to teach the full reward function, it is easy to end up with a representation that contains spurious correlations in the data, which fails to generalize to new settings. Instead, our ultimate goal is to enable robots to identify and isolate the causal features that people actually care about and use when they represent states and behavior. Our idea is that we can tune into this representation by asking users what behaviors they consider similar: behaviors will be similar if the features that matter are similar, even if low-level behavior is different; conversely, behaviors will be different if even one of the features that matter differs. This, in turn, is what enables the robot to disambiguate between what needs to go into the representation versus what is spurious, as well as what aspects of behavior can be compressed together versus not. The notion of learning representations based on similarity has a nice parallel in contrastive learning, a self-supervised representation learning technique that maps visually similar data points to similar embeddings, where similarity is defined by a designer through data augmentation heuristics. By contrast, in order to learn the representations that people use, and thereby their preferences and objectives, we rely on the users' own definition of similarity. In simulation as well as in a user study, we show that learning through such similarity queries leads to representations that, while far from perfect, are indeed more generalizable than self-supervised and task-input alternatives.
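The contrastive flavor of these similarity queries can be sketched as follows. The linear feature map `W`, the margin, and the pairwise loss form are all illustrative assumptions, not the paper's model; the point is only that behaviors the user labels similar are pulled together in feature space while dissimilar ones are pushed at least a margin apart.

```python
import numpy as np

def similarity_query_loss(W, a, b, similar, margin=1.0):
    """Contrastive-style loss on a user similarity judgment: `similar` is the
    user's answer to the query, and W maps raw behavior vectors into the
    learned feature space."""
    d = np.linalg.norm(W @ a - W @ b)
    if similar:
        return d ** 2                      # pull similar behaviors together
    return max(0.0, margin - d) ** 2       # push dissimilar ones a margin apart
```

Minimizing this over many queries shapes `W` so that distance in feature space tracks the user's notion of similarity rather than a designer's augmentation heuristics.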
We address the problem of extracting key steps from unlabeled procedural videos, motivated by the potential of Augmented Reality (AR) headsets to revolutionize job training and performance. We decompose the problem into two steps: representation learning and key steps extraction. We employ self-supervised representation learning via a training strategy that adapts off-the-shelf video features using a temporal module. Training implements self-supervised learning losses involving multiple cues such as appearance, motion and pose trajectories extracted from videos to learn generalizable representations. Our method extracts key steps via a tunable algorithm that clusters the representations extracted from procedural videos. We quantitatively evaluate our approach with key step localization and also demonstrate the effectiveness of the extracted representations on related downstream tasks like phase classification. Qualitative results demonstrate that the extracted key steps are meaningful to succinctly represent the procedural tasks.
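The key-step extraction stage can be sketched as plain k-means over the learned per-frame features. The paper's actual tunable algorithm may differ; the cluster count `k`, iteration budget, and seeding scheme below are assumptions made for illustration.

```python
import numpy as np

def extract_key_steps(features, k, iters=20, seed=0):
    """Cluster per-frame feature vectors with k-means; the cluster centers
    serve as candidate key steps, and `assign` labels each frame."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), size=k, replace=False)]
    for _ in range(iters):
        # assign each frame to its nearest center, then recompute centers
        assign = np.argmin(((features[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            members = features[assign == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    assign = np.argmin(((features[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    return centers, assign
```

Varying `k` is what makes the extraction "tunable": fewer clusters yield coarser key steps, more clusters yield finer-grained ones.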
An oft-cited open problem of federated learning is the existence of data heterogeneity at the clients. One pathway to understanding the drastic accuracy drop in federated learning is by scrutinizing the behavior of the clients' deep models on data with different levels of "difficulty", which has been left unaddressed. In this paper, we investigate a different and rarely studied dimension of FL: ordered learning. Specifically, we aim to investigate how ordered learning principles can contribute to alleviating the heterogeneity effects in FL. We present theoretical analysis and conduct extensive empirical studies on the efficacy of orderings spanning three kinds of learning: curriculum, anti-curriculum, and random curriculum. We find that curriculum learning largely alleviates non-IIDness. Interestingly, the more disparate the data distributions across clients, the more they benefit from ordered learning. We provide analysis explaining this phenomenon, specifically indicating how curriculum training appears to make the objective landscape progressively less convex, suggesting fast converging iterations at the beginning of the training procedure. We derive quantitative results of convergence for both convex and nonconvex objectives by modeling the curriculum training on federated devices as local SGD with locally biased stochastic gradients. Also, inspired by ordered learning, we propose a novel client selection technique that benefits from the real-world disparity in the clients. Our proposed approach to client selection has a synergistic effect when applied together with ordered learning in FL.
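The three orderings compared above can be sketched in a few lines. Scoring "difficulty" by per-sample loss under the current model is an assumption made here for concreteness; other difficulty proxies are possible.

```python
import numpy as np

def ordered_indices(sample_losses, mode="curriculum", seed=0):
    """Return a training order over samples: easy-first (curriculum),
    hard-first (anti-curriculum), or a random curriculum."""
    easy_first = np.argsort(sample_losses)  # low loss = "easy"
    if mode == "curriculum":
        return easy_first
    if mode == "anti-curriculum":
        return easy_first[::-1]
    return np.random.default_rng(seed).permutation(len(sample_losses))
```

In the federated setting, each client would apply such an ordering to its local data before running local SGD rounds.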
This paper tackles the challenging problem of automating code updates to fix deprecated API usages of open source libraries by analyzing their release notes. Our system employs a three-tier architecture: first, a web crawler service retrieves deprecation documentation from the web; then a specially built parser processes those text documents into tree-structured representations; finally, a client IDE plugin locates and fixes identified deprecated usages of libraries in a given codebase. The focus of this paper in particular is the parsing component. We introduce a novel transition-based parser in two variants: based on a classical feature engineered classifier and a neural tree encoder. To confirm the effectiveness of our method, we gathered and labeled a set of 426 API deprecations from 7 well-known Python data science libraries, and demonstrated that our approach decisively outperforms a non-trivial neural machine translation baseline.
Using a comprehensive sample of 2,585 bankruptcies from 1990 to 2019, we benchmark the performance of various machine learning models in predicting financial distress of publicly traded U.S. firms. We find that gradient boosted trees outperform other models in one-year-ahead forecasts. Variable permutation tests show that excess stock returns, idiosyncratic risk, and relative size are the more important variables for predictions. Textual features derived from corporate filings do not improve performance materially. In a credit competition model that accounts for the asymmetric cost of default misclassification, the survival random forest is able to capture large dollar profits.
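The variable permutation test mentioned above can be sketched as follows; the model, metric, and repeat count are placeholders rather than the paper's exact protocol. The idea: shuffle one feature column and measure how much predictive performance drops, averaged over repeats.

```python
import numpy as np

def permutation_importance(predict, X, y, col, metric, n_repeats=5, seed=0):
    """Average drop in `metric` after shuffling column `col` of X.
    A large drop means the model relies heavily on that variable."""
    rng = np.random.default_rng(seed)
    baseline = metric(y, predict(X))
    drops = []
    for _ in range(n_repeats):
        Xp = X.copy()
        rng.shuffle(Xp[:, col])          # destroy this feature's information
        drops.append(baseline - metric(y, predict(Xp)))
    return float(np.mean(drops))
```

Applied to a fitted bankruptcy model, columns such as excess stock returns would show large drops, while uninformative columns show drops near zero.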
Tensor robust principal component analysis (RPCA), which seeks to separate a low-rank tensor from its sparse corruptions, has been crucial in data science and machine learning where tensor structures are becoming more prevalent. While powerful, existing tensor RPCA algorithms can be difficult to use in practice, as their performance can be sensitive to the choice of additional hyperparameters, which are not straightforward to tune. In this paper, we describe a fast and simple self-supervised model for tensor RPCA using deep unfolding by only learning four hyperparameters. Despite its simplicity, our model expunges the need for ground truth labels while maintaining competitive or even greater performance compared to supervised deep unfolding. Furthermore, our model is capable of operating in extreme data-starved scenarios. We demonstrate these claims on a mix of synthetic data and real-world tasks, comparing performance against previously studied supervised deep unfolding methods and Bayesian optimization baselines.
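One iteration of the underlying RPCA recursion that deep unfolding turns into a network layer can be sketched as follows (matrix case for simplicity). The scalar thresholds below are the kind of hyperparameters the model learns, though this exact update form is an assumption, not the paper's architecture.

```python
import numpy as np

def soft_threshold(X, tau):
    """Elementwise shrinkage: the proximal operator of the l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca_step(Y, S, tau_l, tau_s):
    """One alternating update for Y ~ L + S: singular-value thresholding
    recovers the low-rank part L, soft thresholding the sparse part S."""
    U, sv, Vt = np.linalg.svd(Y - S, full_matrices=False)
    L = (U * soft_threshold(sv, tau_l)) @ Vt
    S = soft_threshold(Y - L, tau_s)
    return L, S
```

Unfolding stacks a fixed number of such steps and learns the thresholds end-to-end, which is how the model gets away with only a handful of trainable hyperparameters.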
Reinforcement learning can enable robots to navigate to distant goals while optimizing user-specified reward functions, including preferences for following lanes, staying on paved paths, or avoiding freshly mowed grass. However, online learning from trial-and-error for real-world robots is logistically challenging, and methods that instead can utilize existing datasets of robotic navigation data could be significantly more scalable and enable broader generalization. In this paper, we present ReViND, the first offline RL system for robotic navigation that can leverage previously collected data to optimize user-specified reward functions in the real-world. We evaluate our system for off-road navigation without any additional data collection or fine-tuning, and show that it can navigate to distant goals using only offline training from this dataset, and exhibit behaviors that qualitatively differ based on the user-specified reward function.